.S Basic Research in Artificial Intelligence and Formal Reasoning
//pers John#McCarthy, Richard#Weyhrauch, Lewis#Creary, Luigia#Aiello,
Jussi#Ketonen, Ma#Xiwen, Jon#Doyle; sra Christopher#Goad, Carolyn#Talcott,
Ben#Moszkowski
Applied research requires basic research to replenish the stock of ideas
on which its progress depends.
The long range goal of work in basic AI and formal reasoning is
to make computers carry out the reasoning required to solve problems.
Our work over the past few years has made it considerably clearer how our
formal approach can be applied to problems in AI and in the mathematical
theory of computation (MTC).
This brings application nearer, especially in MTC.
.ss Formal Reasoning
John McCarthy continued his work on the formalization of the common sense
facts about the world needed for intelligent behavior, and on the
formalization of computer programs.
This work led to the following reports and publications:
1. Two papers with R.S. Cartwright on the representation of recursive
programs in first order logic were published in conference proceedings [2, 3].
2. In Spring 1979 McCarthy discovered a formalism, now called Elephant
(it never forgets), for representing sequential programs
in first order logic.
Unlike previous formalisms, Elephant uses time explicitly as
an independent variable and a program counter as an explicit
function of time. As a result, properties of programs
can be proved directly from the program and the axiomatization
of the data domain. In addition, Elephant programs can refer
directly to past inputs and events, sometimes obviating the
need for the programmer to define certain data structures for
remembering these events; the data structures are then supplied
by the compiler. In this one respect Elephant is a higher level
language than any developed so far.
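To illustrate the style, the following sketch (our rendering in a modern
notation, not McCarthy's) treats the program counter pc and a variable x of
a small summing program as explicit functions of the time t; properties of
the program can then be established by induction on t, and the program
refers to past inputs directly.
.begin nofill; preface 0;

from functools import lru_cache

INPUT = [3, 1, 4]                  # input(t) for t = 0, 1, 2

@lru_cache(maxsize=None)
def pc(t):                         # the program counter as a function of time
    if t == 0:
        return 0                   # start in the reading state
    return 0 if pc(t - 1) == 0 and t < len(INPUT) else 1

@lru_cache(maxsize=None)
def x(t):                          # x(t+1) = x(t) + input(t) while pc(t) = 0
    if t == 0:
        return 0
    return x(t - 1) + INPUT[t - 1] if pc(t - 1) == 0 else x(t - 1)

print(x(len(INPUT)))               # 8: the sum of the past inputs

.end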
3. Work continued on formalisms for representing knowledge.
Ma Xiwen revised McCarthy's set of axioms for representing the
rather difficult knowledge puzzle of Mr. S and Mr. P and used
FOL to verify the translation of the knowledge problem to
a purely arithmetic problem.
This work is part of a study of the representation of facts about
knowledge that will aid the development of programs that
know what people and other programs do and don't know.
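The arithmetic problem that results from the translation can be checked by
brute force. The sketch below is ours and follows one common formulation of
the puzzle (2 ≤ x < y, x + y ≤ 100, with Mr. P knowing the product and
Mr. S the sum); the exact bounds and axioms used in the FOL work may differ.
.begin nofill; preface 0;

from collections import Counter

# Candidate pairs (x, y) with 2 <= x < y and x + y <= 100.
pairs = [(x, y) for x in range(2, 99)
                for y in range(x + 1, 100) if x + y <= 100]
prods = Counter(x * y for x, y in pairs)

# P: "I don't know the numbers" -- the product is ambiguous.
s1 = [p for p in pairs if prods[p[0] * p[1]] > 1]

# S: "I knew you didn't" -- every split of the sum has an ambiguous product.
s2 = [p for p in s1
      if all(prods[a * (p[0] + p[1] - a)] > 1
             for a in range(2, (p[0] + p[1] - 1) // 2 + 1))]

# P: "Now I know" -- the product is unique among the survivors.
prods2 = Counter(x * y for x, y in s2)
s3 = [p for p in s2 if prods2[p[0] * p[1]] == 1]

# S: "Now I know too" -- the sum is unique among the survivors.
sums3 = Counter(x + y for x, y in s3)
print([p for p in s3 if sums3[p[0] + p[1]] == 1])   # [(4, 13)]

.end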
4. McCarthy further developed circumscription, his method of
non-monotonic reasoning, and related it to the methods developed by
Raymond Reiter and by Drew McDermott and Jon Doyle jointly.
It turns out that humans do non-monotonic reasoning all the time,
and machines must do it in order to behave intelligently. In
our opinion, the formalization of non-monotonic reasoning is
a major theoretical step preparing for practical AI systems
that can avoid excessively specialized formulations of their
knowledge of the world.
A conference on non-monotonic reasoning was held at Stanford
in November 1978, and its proceedings are being published as a
special issue of the journal %2Artificial Intelligence%1.
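For reference, the predicate circumscription schema from McCarthy's paper
in that issue has the form
.begin nofill; preface 0;

    A(Phi) ∧ (∀x)(Phi(x) ⊃ P(x)) ⊃ (∀x)(P(x) ⊃ Phi(x))

.end
where P is the predicate being circumscribed in the sentence A, Phi is a
predicate parameter, and A(Phi) is the result of replacing P by Phi in A.
Instances of the schema say that the only objects satisfying P are those
that A itself forces to satisfy P; this is how the "jumping to conclusions"
of non-monotonic reasoning is captured within first order logic.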
Carolyn Talcott continued her studies in preparation
for writing a dissertation that will combine the methods of recursive
function theory with the programming techniques of LISP.
We hope it will permit the formulation and proof
of more complex facts about more complex programs.
Lewis Creary continued his work on the epistemology of artificial
intelligence, and on the design of an advice-taking problem solver.
On the epistemological side, new ideas were developed for dealing with:
.begin indent 0,3; preface 0; nojust
a) causal reasoning and the frame and qualification problems,
b) the formal representation of actions,
c) epistemic feasibility of actions and the need to plan for the
availability of knowledge, and
d) the structuring, classification, and use of domain-specific
knowledge within a general problem solving system.
.end
Work on the advice-taking problem solver was concentrated for the most
part on high-level design issues and associated theoretical questions.
A tentative high-level design and long term plan were constructed,
incorporating the epistemological ideas mentioned above, as well as new
theoretical ideas concerning:
.begin indent 0,3; preface 0; nojust
e) problematic goal seeking, goal refinement, and achievement planning,
f) advice seeking and taking as a means of program development and selective
knowledge acquisition, and
g) complex motivational structures for use in problem solving.
.end
.cb References
.bib;
⊗Cartwright, Robert,
"A Practical Formal Semantic Definition and Verification System
for Typed Lisp",
<Ph.D. Thesis>, Computer Science Department, Stanford University,
Stanford, California, 1976.
⊗Cartwright, Robert and John McCarthy, "Recursive Programs
as Functions in a First Order Theory", in E. Blum and S. Takasu (eds.),
<Proceedings of the International
Conference on Mathematical Studies of Information Processing>, Springer
Publishers, 1978.
⊗Cartwright, Robert and John McCarthy,
"First Order Programming Logic",
<Proc. 6th Annual ACM Symp. on Principles of Programming Languages>,
San Antonio, Texas, January 1979.
.end
.ss First Order Logic
During the past year the FOL group's efforts have been devoted mainly to the
following activities: 1) writing and giving talks about projects;
2) doing many new experiments, some of which
are described below; 3) planning for the next version
of FOL and beginning its implementation (this has taken the form of a new
implementation of LISP, described below); 4) using the
Prawitz normalization theorem to automatically specialize bin packing
algorithms; and 5) the invention (by Ketonen and Weyhrauch) of a new decision
procedure for a subset of predicate logic.
1) Publications are listed below. The bulk of the other writing
consists of a set of lecture notes by Richard Weyhrauch, presently about
125 pages long, which he hopes will become a monograph on FOL.
2) Weyhrauch is preparing a paper that contains many diverse examples of
the use of FOL to express currently interesting problems discussed by AI
people. These include non-monotonic logic, morning star/evening star,
meta-theory, dynamic introduction of new reasoning principles, reasoning
about belief, procedural vs. declarative representation, Not vs. Thnot,
and reasoning about inconsistencies. These examples will demonstrate the
breadth of the FOL epistemology. In addition, Luigia Aiello did extensive
work demonstrating how metatheory can be used to represent elementary
algebra in a reasonable way. This work will be presented at this year's
automatic deduction workshop.
3) Planning for the next version of FOL has taken several directions. The
first amounts to discussions about what the underlying logic of the next
version should look like. The second involves how to integrate the work
of Chris Goad on normalization into the new FOL system. The
third concerns what kind of programming language FOL should be written in.
The most important issue here is how we intend to implement the FOL
evaluator in such a way that the FOL code becomes self-axiomatizing and we
can use the FOL evaluator itself as our basic interpreter. This discussion
is central to Carolyn Talcott's thesis, and it has resulted in Weyhrauch
producing a new assembly language implementation of LISP which is
designed to be theoretically smooth enough that an axiomatization of
its properties can be carried out by Carolyn Talcott.
4) The work of producing a computationally efficient implementation of
normalization was completed early in the year, and a series of experiments
was carried out to assess the practical applicability of proofs as
descriptions of computation. Chris Goad chose a
class of bin packing problems as a test domain. The work involved both
building up a collection of proofs which established some computationally
useful facts about the domain, and investigating the computational
efficiency of these proofs. This work forms the basis of Goad's
Ph.D. thesis, which he is currently writing up.
5) Ketonen worked out the details of, and greatly extended, an idea of
Weyhrauch's for a new decision procedure for a subset of the
predicate calculus which they now call the direct part of first order
logic. Ketonen is currently coding this algorithm.
.cb Publications
.bib;
⊗L. Aiello and G. Prini, "An efficient interpreter for the LAMBDA calculus",
to be published in <Journal of Computer and System Sciences>, 1979.
⊗G. Prini, "Applicative parallelism", submitted for publication in <Journal of
the ACM>, 1979.
⊗L. Aiello, and G. Prini, "Design of a personal computing system", to be
published in <Proceeedings of the Annual Conference of the Canadian
Information Processing Society>, Victoria (British Columbia), May 12-14,
1980.
⊗L. Aiello and G. Prini, "Operators in LISP: some remarks on Iverson's
'Operators'", to be published in <ACM Transactions on Programming Languages
and Systems>, 1980.
⊗G. Prini, "Explicit parallelism in LISP-like languages", submitted to 1980
LISP Conference, Stanford University, 24-27 August 1980.
⊗L. Aiello, "Evaluating functions defined in first-order logic" (tentative
title), submitted to Logic Programming Conference, Budapest, July
1980.
⊗L. Aiello and R. Weyhrauch, "Using meta-theoretic reasoning to do algebra",
<Proc. Automatic Deduction Conference>, Les Arcs (France), July 8-11,
1980.
⊗R. Weyhrauch, "Prolegomena to a theory of meechanized formal reasoning",
To appear in <AI Journal> special issue on non-monotonic logic.
⊗C. Goad, "Proofs as descriptions of computations", <Proc. Automatic
Deduction Conference>, Les Arcs (France), July 8-11, 1980.
.end
.ss Mathematical Theory of Program Testing
Martin Brooks' thesis describes how testing can be used to show a
program's correctness.
The scenario for using testing to show correctness is that the programmer
writes a program P which is either correct or which differs
from the correct program by one or more errors in a specified class E.
The theory relates:
.begin indent 0,4; nojust; preface 0;
(1) P's structure.
(2) The combinatorics of how errors in E could have led to P (incorrectly)
being written.
(3) The type of program behavior that the programmer will observe when
testing, for example the program's output, or a trace.
(4) The characteristics of "complete" test data, which is guaranteed to reveal
the presence of errors, assuming all errors come from E.
.end
The theory is developed at a general level and is then specialized
to the case of recursive programs having only simple errors, where the
programmer observes a trace of each test run.
A method is devised for generating
test data, based on this application of the theory.
The generated test data is "complete",
meaning that if the only errors in the recursive program are of the specified types,
then the test data will
reveal the errors through at least one incorrect trace.
Viewed contrapositively: if the written program P is "almost correct",
that is, differs from the correct program at most by errors of the
specified simple types E, and the trace of each
test run is as the programmer intended, then the program must be correct.
The test data generation method has been implemented (in MacLisp) and examples
of its operation are given.
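The notion of completeness can be made concrete with a toy model (ours, and
simplified: it compares outputs, where Brooks' theory works with traces).
Model the error class E as a set of variant programs; test data is complete
when every variant that computes a different function is revealed by some
test, so a program from the class that passes all tests must be correct.
.begin nofill; preface 0;

def C(n):                          # the intended (correct) function
    return sum(range(n + 1))

# Hypothetical error class E: simple off-by-one variants of C.
def e1(n): return sum(range(n))
def e2(n): return sum(range(n + 2))

def complete(tests, correct, error_class):
    # Every erroneous variant disagrees with the correct function
    # on at least one test input.
    return all(any(e(t) != correct(t) for t in tests)
               for e in error_class)

print(complete([0, 1], C, [e1, e2]))    # True: t=1 reveals e1, t=0 reveals e2

.end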
.cb Reference
.bib
⊗Martin Brooks, "Determining Correctness by testing", AIM-336, June 1980.
.end
.ss Proofs as Descriptions of Algorithms
Chris Goad is currently completing his Ph.D. thesis, titled "Computational
uses of the manipulation of formal proofs". The thesis concerns the use
of proofs as descriptions of algorithms. In particular, a proof in an
appropriate system of a formula of the form "for all x there exists a y
such that R(x,y)" can be used to express an algorithm A which satisfies the
"specification" R in the sense that, for each input n, R(n,A(n)) holds.
The proof can be "executed" by use of any of several methods derived from
proof theory, for example, by a proof normalization procedure, or by
extraction of "code" from the proof by means of one of the functional
interpretations.
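As a toy illustration of the idea (ours, not one of the examples from the
thesis), let R(x,y) say that y is the integer square root of x. A
constructive proof of "for all x there exists a y such that R(x,y)" by
induction on x yields the algorithm below; executing the proof on an input
amounts to running the induction.
.begin nofill; preface 0;

def R(x, y):                       # the specification
    return y * y <= x < (y + 1) * (y + 1)

def A(x):                          # the algorithm extracted from the proof
    y = 0                          # base case: R(0, 0) holds
    for n in range(x):             # induction step: given R(n, y),
        if (y + 1) * (y + 1) <= n + 1:
            y += 1                 # either R(n+1, y+1) holds ...
                                   # ... or R(n+1, y) already holds
    return y

assert all(R(x, A(x)) for x in range(10000))

.end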
A proof differs from a conventional program expressing the same algorithm
in that it formalizes more information about the algorithm than does the
program. This additional information expands the class of transformations
on the algorithm which are amenable to automation. For example, there is
a class of "pruning" transformations which improve the computational
efficiency of a natural deduction proof by removing unneeded case
analyses. These transformations make essential use of dependency
information which finds formal expression in a natural deduction proof,
but not in a conventional program. Pruning is particularly useful in
adapting general purpose algorithms to special situations.
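A schematic example of the flavor of pruning (our sketch; Goad's
transformations operate on natural deduction proofs, not on the toy terms
used here): when the two branches of a case analysis reduce to identical
witness computations, the case split contributes nothing and can be
removed, together with the cost of deciding its test at run time.
.begin nofill; preface 0;

# Proof terms as nested tuples; ("cases", test, left, right) is a case
# analysis on test, with one subterm per case.

def prune(t):
    if isinstance(t, tuple) and t[0] == "cases":
        _, test, left, right = t
        left, right = prune(left), prune(right)
        if left == right:          # witness does not depend on the case split
            return left
        return ("cases", test, left, right)
    return t

p = ("cases", "x < 0", ("lit", 1),
     ("cases", "x > 9", ("lit", 1), ("lit", 1)))
print(prune(p))                    # ('lit', 1): both case analyses pruned

.end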
Goad has developed efficient methods for the execution and transformation
of proofs, and has implemented these methods on the Stanford Artificial
Intelligence Laboratory's PDP-10 computer. He has done a series of
experiments on the specialization of a bin-packing algorithm. In these
experiments, substantial increases of efficiency were obtained by use of
the additional "logical" information contained in proofs.
The general objective of Goad's work has been the development of improved
technology for manipulating methods of computation. In his thesis, he has
exhibited new automatic operations on algorithms - operations which are
made possible by the use of formal proofs to describe computation.
.cb Reference
.bib;
⊗C. Goad, "Proofs as descriptions of computations", <Proc. Automatic
Deduction Conference>, Les Arcs (France), July 8-11, 1980.
.end